Formulas

Author: Salman Ahmadi-Asl
Published: January 29, 2026

Elementary Matrices

  • Inverse of Row Swap: \(P_{ij}^{-1} = P_{ij}\)
  • Inverse of Row Scaling: \(D_i(\alpha)^{-1} = D_i(1/\alpha)\)
  • Inverse of Row Addition: \(E_{ij}(\alpha)^{-1} = E_{ij}(-\alpha)\)
  • Determinant of Row Swap: \(\det(P_{ij}) = -1\)
  • Determinant of Row Scaling: \(\det(D_i(\alpha)) = \alpha\)
  • Determinant of Row Addition: \(\det(E_{ij}(\alpha)) = 1\)
  • Invertible Matrix Factorization: \(A = E_1 E_2 \cdots E_k\), so \(A^{-1} = E_k^{-1} \cdots E_1^{-1}\)
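These identities are easy to verify numerically. A minimal NumPy sketch (the 3×3 matrices and the value of \(\alpha\) are chosen here for illustration):

```python
import numpy as np

n = 3
I = np.eye(n)

# Row-swap matrix P_12: swap rows 1 and 2 (0-based rows 0 and 1)
P = I[[1, 0, 2]]
# Row-scaling matrix D_2(alpha): multiply row 2 by alpha
alpha = 5.0
D = np.diag([1.0, alpha, 1.0])
# Row-addition matrix E_31(alpha): add alpha * (row 1) to row 3
E = I.copy()
E[2, 0] = alpha

# Inverses follow the formulas above
assert np.allclose(np.linalg.inv(P), P)                       # P^{-1} = P
assert np.allclose(np.linalg.inv(D), np.diag([1, 1/alpha, 1]))  # D(alpha)^{-1} = D(1/alpha)
Einv = I.copy(); Einv[2, 0] = -alpha
assert np.allclose(np.linalg.inv(E), Einv)                    # E(alpha)^{-1} = E(-alpha)

# Determinants: -1, alpha, 1 respectively
assert np.isclose(np.linalg.det(P), -1.0)
assert np.isclose(np.linalg.det(D), alpha)
assert np.isclose(np.linalg.det(E), 1.0)
```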

LU Decomposition

  • LU Decomposition: \(A = LU\) (\(L\) unit lower triangular, \(U\) upper triangular)
  • Existence Condition: All leading principal minors \(M_k \neq 0\)
  • LU with Partial Pivoting: \(PA = LU\) (exists for any invertible \(A\))
  • LU with Complete Pivoting: \(PAQ = LU\) (exists for any matrix \(A\))
  • LU Factorization Cost: \(\frac{2}{3}n^3\) flops
  • Forward/Back Substitution Cost: \(2n^2\) flops per solve
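A short Doolittle-style LU sketch in NumPy, without pivoting (so it assumes all leading principal minors are nonzero, per the existence condition above; the example matrix is illustrative):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting; assumes nonzero leading principal minors."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # multiplier stored in L
            U[i, k:] -= L[i, k] * U[k, k:]    # eliminate below the pivot
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)                               # A = LU
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
```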

Triangular System Solving

  • Forward Substitution: \[x_i = \frac{1}{l_{ii}}\left(b_i - \sum_{j=1}^{i-1} l_{ij}x_j\right)\]
  • Back Substitution: \[x_i = \frac{1}{u_{ii}}\left(b_i - \sum_{j=i+1}^{n} u_{ij}x_j\right)\]
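The two substitution formulas translate directly into code. A sketch (example systems are illustrative):

```python
import numpy as np

def forward_sub(L, b):
    """Solve Lx = b for lower-triangular L, row by row from the top."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def back_sub(U, b):
    """Solve Ux = b for upper-triangular U, row by row from the bottom."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[2.0, 0.0], [1.0, 3.0]])
U = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([4.0, 5.0])
assert np.allclose(L @ forward_sub(L, b), b)
assert np.allclose(U @ back_sub(U, b), b)
```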

LDU and Special Decompositions

  • LDU Decomposition: \(A = LDU_0\) where \(D = \text{diag}(u_{11}, \ldots, u_{nn})\), \(U_0 = D^{-1}U\)
  • Symmetric LDU: \(A = LDL^T\) (when \(A = A^T\))
  • Cholesky Decomposition: \(A = LL^T\) (when \(A\) is symmetric positive definite, \(L\) has positive diagonal)
  • Cholesky Cost: \(\frac{1}{3}n^3\) flops
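A quick Cholesky check with NumPy's built-in factorization (the symmetric positive definite matrix here is an assumed example):

```python
import numpy as np

# Symmetric positive definite example (eigenvalues are both positive)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)        # lower triangular with positive diagonal
assert np.allclose(L @ L.T, A)   # A = L L^T
assert np.all(np.diag(L) > 0)
```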

Positive Definiteness

  • Sylvester’s Criterion: Symmetric \(A\) is positive definite iff all leading principal minors \(M_k > 0\)
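Sylvester's criterion can be checked directly from determinants of leading submatrices. A sketch (the test matrices are illustrative; for large matrices this is far less efficient than attempting a Cholesky factorization):

```python
import numpy as np

def is_posdef_sylvester(A):
    """Check positive definiteness of symmetric A via leading principal minors."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3, so PD
assert is_posdef_sylvester(A)
B = np.array([[1.0, 2.0], [2.0, 1.0]])     # minors 1 and -3, not PD
assert not is_posdef_sylvester(B)
```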

Determinant with Pivoting

  • Row Swap Effect: \(\det(P_{ij}A) = -\det(A)\)
  • Column Swap Effect: \(\det(AP_{ij}) = -\det(A)\)
  • From \(PAQ = LU\): \(\det(A) = \pm \prod_{i} u_{ii}\), where the sign is \((-1)^s\) and \(s\) is the total number of row and column swaps
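The sign-flip effect of swaps is easy to confirm numerically (example matrices are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[0.0, 1.0], [1.0, 0.0]])   # swap rows (or columns) 1 and 2
assert np.isclose(np.linalg.det(P @ A), -np.linalg.det(A))  # row swap flips the sign
assert np.isclose(np.linalg.det(A @ P), -np.linalg.det(A))  # column swap does too
```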

Matrix Inversion

  • 2x2 Inverse: \[\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\]
  • Transpose of Inverse: \((A^{-1})^T = (A^T)^{-1}\)
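Both identities checked against NumPy's general-purpose inverse (the entries are illustrative):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])
Ainv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)   # the 2x2 formula
assert np.allclose(Ainv, np.linalg.inv(A))
assert np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))   # (A^{-1})^T = (A^T)^{-1}
```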

Inner Products and Norms

  • Dot Product (Inner Product in \(\mathbb{R}^n\)): \(\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i\)
  • Vector Magnitude (Norm): \(\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}\)
  • Unit Vector: \(\hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}\) (for \(\mathbf{v} \neq \mathbf{0}\))
  • Orthogonality Condition: \(\mathbf{u} \perp \mathbf{v}\) if and only if \(\mathbf{u} \cdot \mathbf{v} = 0\)
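A small numerical illustration of these definitions (vectors chosen so that \(\mathbf{u} \perp \mathbf{v}\)):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, -1.0])
assert np.isclose(np.linalg.norm(u), 3.0)     # sqrt(1 + 4 + 4)
u_hat = u / np.linalg.norm(u)                 # unit vector in the direction of u
assert np.isclose(np.linalg.norm(u_hat), 1.0)
assert np.isclose(u @ v, 0.0)                 # u . v = 2 + 0 - 2 = 0, so u ⊥ v
```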

Orthogonal Projections

  • Projection of \(\mathbf{v}\) onto \(\mathbf{u}\): \(\text{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{v} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\mathbf{u}\)
  • Orthogonal Component: \(\mathbf{v} - \text{proj}_{\mathbf{u}}(\mathbf{v})\) is orthogonal to \(\mathbf{u}\)
  • Orthonormal Set Property: \(\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}\)
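The projection formula and the orthogonality of the residual, checked on an illustrative pair of vectors:

```python
import numpy as np

u = np.array([1.0, 1.0, 0.0])
v = np.array([2.0, 3.0, 4.0])
p = (v @ u) / (u @ u) * u        # proj_u(v)
r = v - p                        # orthogonal component
assert np.isclose(r @ u, 0.0)    # residual is orthogonal to u
assert np.allclose(p + r, v)     # v splits into projection + orthogonal part
```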

Gram-Schmidt Orthogonalization

  • General Step (\(j = 1, 2, \ldots, n\)): \[\mathbf{u}_j = \mathbf{v}_j - \sum_{i=1}^{j-1} \frac{\mathbf{v}_j \cdot \mathbf{u}_i}{\mathbf{u}_i \cdot \mathbf{u}_i}\mathbf{u}_i, \quad \mathbf{e}_j = \frac{\mathbf{u}_j}{\|\mathbf{u}_j\|}\]
  • First Vector: \(\mathbf{u}_1 = \mathbf{v}_1\), \(\mathbf{e}_1 = \frac{\mathbf{u}_1}{\|\mathbf{u}_1\|}\)
  • Second Vector: \(\mathbf{u}_2 = \mathbf{v}_2 - \frac{\mathbf{v}_2 \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\mathbf{u}_1\), \(\mathbf{e}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|}\)
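The general step above can be sketched as classical Gram-Schmidt on the columns of a matrix (assumed linearly independent; the input is illustrative):

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt on the columns of V (assumed independent)."""
    U = []
    for v in V.T:
        # Subtract the projections onto all previously orthogonalized vectors
        u = v - sum((v @ ui) / (ui @ ui) * ui for ui in U)
        U.append(u)
    return np.array([u / np.linalg.norm(u) for u in U]).T

V = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
E = gram_schmidt(V)
assert np.allclose(E.T @ E, np.eye(2))   # columns of E are orthonormal
```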

QR Decomposition

  • QR Factorization: \(A = QR\) where \(Q^T Q = I\) (orthonormal columns) and \(R\) is upper triangular
  • Computing R: \(R = Q^T A\)
  • Dimension of Q: \(m \times n\) matrix (\(m\) rows, \(n\) orthonormal columns)
  • Dimension of R: \(n \times n\) upper triangular matrix with positive diagonal entries
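NumPy's reduced QR matches these shapes and identities (note that LAPACK does not guarantee a positive diagonal for \(R\), unlike the Gram-Schmidt convention above; the matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Q, R = np.linalg.qr(A)                    # reduced QR: Q is 3x2, R is 2x2
assert Q.shape == (3, 2) and R.shape == (2, 2)
assert np.allclose(Q.T @ Q, np.eye(2))    # orthonormal columns
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(R, np.triu(R))         # R upper triangular
assert np.allclose(R, Q.T @ A)            # R = Q^T A
```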

Subspaces and Spaces

  • Column Space (\(\text{Col}(A)\)): Span of columns of \(A\); subspace of \(\mathbb{R}^m\) for \(A \in \mathbb{R}^{m \times n}\)
  • Null Space (\(\text{Nul}(A)\)): \(\{\mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{0}\}\); subspace of \(\mathbb{R}^n\)
  • Complete Solution: \(\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n\) where \(\mathbf{x}_p\) is a particular solution and \(\mathbf{x}_n \in \text{Nul}(A)\)
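The complete-solution structure can be seen on a small rank-deficient system (the particular and null-space vectors below are chosen by hand for this illustrative \(A\)):

```python
import numpy as np

# Column 2 is twice column 1, so A has a nontrivial null space
A = np.array([[1.0, 2.0], [2.0, 4.0]])
b = np.array([3.0, 6.0])
x_p = np.array([3.0, 0.0])        # a particular solution: A x_p = b
x_n = np.array([-2.0, 1.0])       # a null-space vector:   A x_n = 0
assert np.allclose(A @ x_p, b)
assert np.allclose(A @ x_n, 0)
for t in (-1.0, 0.0, 2.5):        # every x_p + t*x_n also solves Ax = b
    assert np.allclose(A @ (x_p + t * x_n), b)
```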

Least Squares

  • Normal Equations: \(A^T A \hat{\mathbf{x}} = A^T \mathbf{b}\)
  • Least-Squares Solution (full column rank): \(\hat{\mathbf{x}} = (A^T A)^{-1} A^T \mathbf{b}\)
  • Pseudoinverse (full column rank): \(A^+ = (A^T A)^{-1} A^T\)
  • QR Least-Squares System: \(R\hat{\mathbf{x}} = Q^T \mathbf{b}\) (solved by back substitution)
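All three routes agree for a full-column-rank system, and the residual is orthogonal to \(\text{Col}(A)\). A sketch (the 3×2 fitting problem is illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                             # full column rank
b = np.array([1.0, 2.0, 2.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)           # normal equations
x_pinv = np.linalg.pinv(A) @ b                         # pseudoinverse route
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]         # library solver
assert np.allclose(x_normal, x_lstsq)
assert np.allclose(x_pinv, x_lstsq)

# Residual b - A x̂ is orthogonal to Col(A): A^T (b - A x̂) = 0
assert np.allclose(A.T @ (b - A @ x_normal), 0)
```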

Projection Matrices

  • Projection onto \(\mathcal{C}(A)\): \(\mathbf{p} = A(A^T A)^{-1} A^T \mathbf{b}\)
  • Projection Matrix: \(P = A(A^T A)^{-1} A^T\) (projects onto \(\mathcal{C}(A)\))
  • Projection onto a Line (spanned by \(\mathbf{a}\)): \(P = \dfrac{\mathbf{a}\mathbf{a}^T}{\mathbf{a}^T \mathbf{a}}\)
  • Projection Matrix Properties: \(P^2 = P\) (idempotent), \(P^T = P\) (symmetric)
  • Complementary Projection: \(I - P\) projects onto \(\mathcal{N}(A^T)\)
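The idempotent and symmetric properties, and the complementary projection, verified on an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                     # full column rank
P = A @ np.linalg.inv(A.T @ A) @ A.T           # projects onto C(A)
assert np.allclose(P @ P, P)                   # idempotent: P^2 = P
assert np.allclose(P.T, P)                     # symmetric:  P^T = P

b = np.array([1.0, 5.0, 2.0])
# (I - P)b lies in N(A^T): it is annihilated by A^T
assert np.allclose(A.T @ ((np.eye(3) - P) @ b), 0)
```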

Eigenvalues and Eigenvectors

  • Eigenvalue-Eigenvector Relationship: \(A\mathbf{v} = \lambda\mathbf{v}\) (defining equation)
  • Characteristic Equation: \(\det(A - \lambda I) = 0\) (equation whose roots are eigenvalues)
  • Finding Eigenvectors: \((A - \lambda I)\mathbf{v} = \mathbf{0}\) (solve this homogeneous system for each eigenvalue)
  • Trace Property: \(\text{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n\) (sum of eigenvalues equals trace)
  • Determinant Property: \(\det(A) = \lambda_1 \cdot \lambda_2 \cdots \lambda_n\) (product of eigenvalues equals determinant)
  • Diagonalization: \(A = PDP^{-1}\) where \(P = [\mathbf{v}_1 \ \mathbf{v}_2 \ \ldots \ \mathbf{v}_n]\) (columns are eigenvectors) and \(D = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)\) (diagonal matrix of eigenvalues)
  • Powers via Diagonalization: \(A^k = PD^kP^{-1}\) where \(D^k = \text{diag}(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k)\)
  • Inverse via Diagonalization: \(A^{-1} = PD^{-1}P^{-1}\) where \(D^{-1} = \text{diag}(1/\lambda_1, 1/\lambda_2, \ldots, 1/\lambda_n)\) (valid if \(A\) is invertible)
  • Eigenvalues of Triangular Matrices: If \(A\) is upper or lower triangular, then eigenvalues are the diagonal entries
  • Eigenvalue of Matrix Power: If \(\lambda\) is an eigenvalue of \(A\), then \(\lambda^k\) is an eigenvalue of \(A^k\)
  • Eigenvalue of Inverse: If \(\lambda\) is an eigenvalue of an invertible matrix \(A\), then \(1/\lambda\) is an eigenvalue of \(A^{-1}\)
  • Linear Independence of Eigenvectors: Eigenvectors corresponding to distinct eigenvalues are linearly independent
  • Cayley-Hamilton Theorem: \(p(A) = 0\) where \(p(\lambda) = \det(A - \lambda I)\) is the characteristic polynomial
  • Spectral Mapping Theorem: If \(A\) has eigenvalues \(\lambda_1, \ldots, \lambda_n\) and \(p(x)\) is a polynomial, then \(p(A)\) has eigenvalues \(p(\lambda_1), \ldots, p(\lambda_n)\)
  • Eigenvalues of Symmetric Matrices: If \(A = A^T\) (real symmetric), then all eigenvalues are real numbers
  • Orthogonality of Eigenvectors: For real symmetric \(A\), eigenvectors corresponding to distinct eigenvalues are orthogonal
  • Spectral Decomposition (Symmetric): \(A = Q\Lambda Q^T\) where \(Q\) is orthogonal and \(\Lambda\) is diagonal (eigenvalues on diagonal, columns of \(Q\) are orthonormal eigenvectors)
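Several of these facts checked at once on a 2×2 example (eigenvalues 5 and 2; the matrix is illustrative, and the Cayley-Hamilton line uses the 2×2 characteristic polynomial \(\lambda^2 - \text{tr}(A)\lambda + \det(A)\)):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, V = np.linalg.eig(A)                        # columns of V are eigenvectors
assert np.allclose(A @ V, V @ np.diag(lam))      # A v_i = lam_i v_i
assert np.isclose(lam.sum(), np.trace(A))        # trace = sum of eigenvalues
assert np.isclose(lam.prod(), np.linalg.det(A))  # det = product of eigenvalues

# Powers via diagonalization: A^3 = P D^3 P^{-1}
A3 = V @ np.diag(lam**3) @ np.linalg.inv(V)
assert np.allclose(A3, np.linalg.matrix_power(A, 3))

# Cayley-Hamilton for 2x2: A^2 - tr(A) A + det(A) I = 0
assert np.allclose(A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2), 0)
```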

Similar Matrices

  • Similarity Definition: \(B = P^{-1}AP\) for some invertible matrix \(P\); \(B\) is similar to \(A\)
  • Powers of Similar Matrix: If \(B = P^{-1}AP\), then \(B^k = P^{-1}A^kP\)
  • Invariants Under Similarity: Similar matrices share the same determinant, trace, characteristic polynomial, and eigenvalues
  • Similarity and Diagonalization: \(A\) is diagonalizable iff \(A\) is similar to a diagonal matrix \(D = \text{diag}(\lambda_1, \ldots, \lambda_n)\)
  • Simultaneous Diagonalization: Matrices \(A\) and \(B\) are simultaneously diagonalizable iff \(AB = BA\) (they commute) and both are individually diagonalizable
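The similarity invariants and the power rule, checked with an arbitrary invertible \(P\) (both matrices are illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])           # any invertible matrix
B = np.linalg.inv(P) @ A @ P                     # B is similar to A

assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.allclose(np.sort(np.linalg.eigvals(B)), np.sort(np.linalg.eigvals(A)))
# B^k = P^{-1} A^k P
assert np.allclose(np.linalg.matrix_power(B, 3),
                   np.linalg.inv(P) @ np.linalg.matrix_power(A, 3) @ P)
```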

Complex Numbers

  • Standard Form: \(z = a + bi\) where \(a = \text{Re}(z)\), \(b = \text{Im}(z)\), \(i^2 = -1\)
  • Complex Conjugate: \(\bar{z} = a - bi\); properties: \(\overline{z+w} = \bar{z}+\bar{w}\), \(\overline{zw} = \bar{z}\bar{w}\), \(\overline{z/w} = \bar{z}/\bar{w}\)
  • Modulus: \(|z| = \sqrt{a^2 + b^2} = \sqrt{z\bar{z}}\)
  • Polar Form: \(z = r(\cos\theta + i\sin\theta) = re^{i\theta}\) where \(r = |z|\), \(\theta = \arg(z)\)
  • Euler’s Formula: \(e^{i\theta} = \cos\theta + i\sin\theta\)
  • Product in Polar Form: \(z_1 z_2 = r_1 r_2 e^{i(\theta_1+\theta_2)}\) (multiply moduli, add arguments)
  • De Moivre’s Formula: \((re^{i\theta})^n = r^n e^{in\theta}\)
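Python's `cmath` module makes these identities concrete; for \(z = 1+i\) we have \(r = \sqrt{2}\), \(\theta = \pi/4\), so De Moivre gives \(z^4 = 4e^{i\pi} = -4\):

```python
import cmath

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)                     # modulus and argument
assert cmath.isclose(z, r * cmath.exp(1j * theta))    # polar form z = r e^{i theta}
assert cmath.isclose(cmath.exp(1j * cmath.pi), -1)    # Euler: e^{i pi} = -1
# De Moivre: (r e^{i theta})^4 = r^4 e^{i 4 theta}
assert cmath.isclose(z**4, r**4 * cmath.exp(1j * 4 * theta))
assert cmath.isclose(z**4, -4 + 0j)
```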

Complex Inner Product

  • Hermitian Inner Product in \(\mathbb{C}^n\): \(\langle\mathbf{x}, \mathbf{y}\rangle = \mathbf{x}^H\mathbf{y} = \sum_{i=1}^n \overline{x_i} y_i\)
  • Norm in \(\mathbb{C}^n\): \(\|\mathbf{x}\| = \sqrt{\langle\mathbf{x},\mathbf{x}\rangle} = \sqrt{\sum_{i=1}^n |x_i|^2}\)
  • Conjugate Symmetry: \(\langle\mathbf{x}, \mathbf{y}\rangle = \overline{\langle\mathbf{y}, \mathbf{x}\rangle}\)
  • Positive Definiteness: \(\langle\mathbf{x}, \mathbf{x}\rangle = \sum|x_i|^2 \geq 0\), with equality iff \(\mathbf{x} = \mathbf{0}\)
  • Cauchy-Schwarz Inequality: \(|\langle\mathbf{x}, \mathbf{y}\rangle| \leq \|\mathbf{x}\|\|\mathbf{y}\|\)
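NumPy's `vdot` implements exactly this convention (conjugating its *first* argument), which makes the properties easy to verify (vectors are illustrative):

```python
import numpy as np

x = np.array([1 + 1j, 2 - 1j])
y = np.array([3j, 1.0])
ip = np.vdot(x, y)                   # sum conj(x_i) y_i
assert np.isclose(ip, sum(xi.conjugate() * yi for xi, yi in zip(x, y)))
assert np.isclose(np.vdot(x, y), np.vdot(y, x).conjugate())   # conjugate symmetry
assert np.isclose(np.sqrt(np.vdot(x, x).real), np.linalg.norm(x))   # induced norm
assert abs(ip) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12     # Cauchy-Schwarz
```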

Conjugate Transpose

  • Definition: \((A^H)_{ij} = \overline{A_{ji}}\); equivalently \(A^H = \overline{A}^T = \overline{A^T}\)
  • Properties: \((A^H)^H = A\), \((AB)^H = B^H A^H\), \((A+B)^H = A^H + B^H\), \((\alpha A)^H = \bar{\alpha} A^H\)
  • Inner Product via Matrix Multiply: \(\langle\mathbf{x}, A\mathbf{y}\rangle = \langle A^H\mathbf{x}, \mathbf{y}\rangle\)
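In NumPy, \(A^H\) is `A.conj().T`; the reversal rule and the adjoint identity check out on illustrative complex matrices:

```python
import numpy as np

A = np.array([[1 + 2j, 3j], [4, 5 - 1j]])
AH = A.conj().T                                  # conjugate transpose A^H
assert np.allclose(AH, A.T.conj())               # conjugation and transpose commute

B = np.array([[2j, 1], [0, 1 + 1j]])
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)   # (AB)^H = B^H A^H

x = np.array([1 + 1j, 2.0])
y = np.array([0.5j, 1 - 1j])
# <x, Ay> = <A^H x, y>, with <u, v> = u^H v computed by np.vdot
assert np.isclose(np.vdot(x, A @ y), np.vdot(AH @ x, y))
```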

Hermitian and Unitary Matrices

  • Hermitian Matrix: \(A = A^H\) (generalizes real symmetric matrices to \(\mathbb{C}\))
  • Hermitian Eigenvalues: All eigenvalues of a Hermitian matrix are real
  • Hermitian Eigenvectors: Eigenvectors of a Hermitian matrix for distinct eigenvalues are orthogonal (w.r.t. Hermitian inner product)
  • Unitary Matrix: \(U^H U = I\) (equivalently \(U^{-1} = U^H\); generalizes real orthogonal matrices)
  • Unitary Preservation: \(\|U\mathbf{x}\| = \|\mathbf{x}\|\) and \(\langle U\mathbf{x}, U\mathbf{y}\rangle = \langle\mathbf{x},\mathbf{y}\rangle\) for all \(\mathbf{x},\mathbf{y}\)
  • Unitary Eigenvalues: All eigenvalues of a unitary matrix lie on the unit circle (\(|\lambda| = 1\))
  • Unitary Determinant: \(|\det(U)| = 1\)
  • Normal Matrix: \(AA^H = A^HA\) (includes Hermitian and unitary matrices as special cases)
  • Spectral Theorem (Complex): Every Hermitian matrix (and more generally every normal matrix) is unitarily diagonalizable: \(A = U\Lambda U^H\) where \(U\) is unitary and \(\Lambda\) is diagonal with real entries (for Hermitian \(A\))
  • Spectral Decomposition: \(A = \sum_{i=1}^n \lambda_i \mathbf{u}_i \mathbf{u}_i^H\) where \(\mathbf{u}_i\) are orthonormal eigenvectors
  • Four Fundamental Subspaces (Complex): \(\mathcal{C}(A^H)\), \(\mathcal{N}(A)\), \(\mathcal{C}(A)\), \(\mathcal{N}(A^H)\); uses \(A^H\) in place of \(A^T\)
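A closing sketch tying these facts together: `np.linalg.eigh` diagonalizes an illustrative Hermitian matrix, its eigenvalues come out real, the eigenvector matrix is unitary, and the rank-one spectral decomposition reassembles \(A\):

```python
import numpy as np

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])                    # Hermitian: A = A^H
assert np.allclose(A, A.conj().T)

lam, U = np.linalg.eigh(A)                       # eigh is for Hermitian matrices
assert np.allclose(lam.imag, 0)                  # eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(2))    # U is unitary
assert np.allclose(U @ np.diag(lam) @ U.conj().T, A)   # A = U Lambda U^H

# Spectral decomposition: A = sum_i lam_i u_i u_i^H
S = sum(l * np.outer(u, u.conj()) for l, u in zip(lam, U.T))
assert np.allclose(S, A)

# Eigenvalues of the unitary U lie on the unit circle
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1)
```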